6,784 research outputs found
A Quantized Johnson Lindenstrauss Lemma: The Finding of Buffon's Needle
In 1733, Georges-Louis Leclerc, Comte de Buffon in France, laid the groundwork of
geometric probability theory by posing an enlightening problem: What is the
probability that a needle thrown randomly on a ground made of equispaced
parallel strips lies on two of them? In this work, we show that the solution to
this problem, and its generalization to higher dimensions, allows us to discover a
quantized form of the Johnson-Lindenstrauss (JL) Lemma, i.e., one that combines
a linear dimensionality reduction procedure with a uniform scalar quantization of
finite precision. In particular, given a finite set of points and a target
distortion level, as soon as the embedding dimension is large enough, we can
(randomly) construct a mapping that approximately preserves the pairwise
distances between the points of this set.
Interestingly, compared to the common JL Lemma, the mapping is quasi-isometric
and we observe both an additive and a multiplicative distortion on the
embedded distances. These two distortions, however, decay as the embedding
dimension increases. Moreover, for coarse quantization, i.e., for a quantization
resolution that is high compared to the radius of the set, the distortion is
mainly additive, while for small resolutions we tend to a Lipschitz isometric
embedding. Finally, we prove the existence of a "nearly" quasi-isometric
embedding of the whole signal space into its quantized image. This one involves
a non-linear distortion of the Euclidean distance that vanishes for distant
points in this set. Noticeably, the additive distortion in this case decays
more slowly with the embedding dimension.
Comment: 27 pages, 2 figures (note: this version corrects a few typos in the
abstract).
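Buffon's classical "short needle" result (needle length no larger than the strip spacing) gives a crossing probability of 2l/(pi*d). A minimal Monte Carlo check of this value, with illustrative parameters (unit needle, unit spacing):

```python
import numpy as np

def buffon_crossing_probability(n_throws=200_000, needle=1.0, spacing=1.0, seed=0):
    """Monte Carlo estimate of the probability that a randomly thrown
    needle crosses a line of a grid of equispaced parallel strips."""
    rng = np.random.default_rng(seed)
    # Distance from the needle's center to the nearest line, uniform on [0, spacing/2],
    # and the needle's angle with the lines, uniform on [0, pi/2].
    d = rng.uniform(0.0, spacing / 2, n_throws)
    theta = rng.uniform(0.0, np.pi / 2, n_throws)
    # The needle crosses a line iff its half-projection on the normal exceeds d.
    crossings = (needle / 2) * np.sin(theta) >= d
    return crossings.mean()

p_hat = buffon_crossing_probability()
p_exact = 2 * 1.0 / (np.pi * 1.0)  # short-needle closed form: 2*l/(pi*d) ~ 0.6366
```

With 200,000 throws the estimate matches the closed form to about two decimal places.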
A Short Note on Compressed Sensing with Partially Known Signal Support
This short note studies a variation of the Compressed Sensing paradigm
introduced recently by Vaswani et al., i.e. the recovery of sparse signals from
a certain number of linear measurements when the signal support is partially
known. The reconstruction method is based on a convex minimization program
coined "innovative Basis Pursuit DeNoise" (or iBPDN). Under the common
l2-fidelity constraint made on the available measurements, this
optimization promotes the l1 sparsity of the candidate signal over the
complement of this known part. In particular, this paper extends the results of
Vaswani et al. to the cases of compressible signals and noisy measurements. Our
proof relies on a small adaptation of the results of Candes in 2008 for
characterizing the stability of the Basis Pursuit DeNoise (BPDN) program. We
emphasize also an interesting link between our method and the recent work of
Davenport et al. on stable embeddings and the
"cancel-then-recover" strategy applied to our problem. For both approaches,
reconstructions are indeed stabilized when the sensing matrix respects the
Restricted Isometry Property for the same sparsity order. We conclude by
sketching an easy numerical method relying on monotone operator splitting and
proximal methods that iteratively solves iBPDN.
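As a rough illustration of the idea, and not the note's actual algorithm (which solves the constrained iBPDN program by monotone operator splitting), here is a hypothetical Lagrangian ISTA sketch that soft-thresholds only the coordinates outside the known support; all constants are illustrative:

```python
import numpy as np

def ista_partial_support(A, y, known_support, lam=0.05, n_iter=800):
    """Lagrangian ISTA sketch of iBPDN-style recovery: minimize
    0.5*||A x - y||^2 + lam*||x restricted to the complement of T||_1,
    i.e. l1 sparsity is promoted only OUTSIDE the known support T."""
    n = A.shape[1]
    x = np.zeros(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L, L = Lipschitz const. of the gradient
    mask = np.ones(n, dtype=bool)
    mask[known_support] = False                 # entries in T are never thresholded
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - y))      # gradient step on the quadratic term
        x = z.copy()
        x[mask] = np.sign(z[mask]) * np.maximum(np.abs(z[mask]) - step * lam, 0.0)
    return x

rng = np.random.default_rng(1)
m, n, k = 40, 100, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
support = rng.choice(n, size=k, replace=False)
x_true = np.zeros(n)
x_true[support] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(m)
x_hat = ista_partial_support(A, y, known_support=support[:4])  # half the support known
```

Knowing part of the support simply removes those coordinates from the thresholding step, which is the "cancel-then-recover" intuition in miniature.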
Small Width, Low Distortions: Quantized Random Embeddings of Low-complexity Sets
Under which conditions and with which distortions can we preserve the
pairwise-distances of low-complexity vectors, e.g., for structured sets such as
the set of sparse vectors or that of low-rank matrices, when these are
mapped in a finite set of vectors? This work addresses this general question
through the specific use of a quantized and dithered random linear mapping
which combines, in the following order, a sub-Gaussian random projection of the
vectors into a lower-dimensional space, a random translation, or "dither",
of the projected vectors, and a uniform scalar quantizer of fixed resolution
applied componentwise. Thanks to this quantized mapping we are first
able to show that, with high probability, an embedding of a bounded set
can be achieved when
distances in the quantized and in the original domains are measured with the
l1- and l2-norm, respectively, provided the number of quantized
observations is large compared to the square of the "Gaussian mean width" of
the set. In this case, we show that the embedding is actually
"quasi-isometric" and only suffers of both multiplicative and additive
distortions whose magnitudes decrease as for general sets, and as
for structured set, when increases. Second, when one is only
interested in characterizing the maximal distance separating two elements of
mapped to the same quantized vector, i.e., the "consistency width"
of the mapping, we show that, for a similar number of measurements and with high
probability, this width decays as that number grows, faster for structured sets
than for general ones. Finally, as an important aspect of our
work, we also establish how the non-Gaussianity of the mapping impacts the
class of vectors that can be embedded or whose consistency width provably
decays as the number of measurements increases.
Comment: Keywords: quantization, restricted isometry property, compressed
sensing, dimensionality reduction. 31 pages, 1 figure.
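The quantized mapping described above can be sketched in a few lines; the constants below (Gaussian projection, resolution 0.5) are illustrative choices, not the paper's. With a Gaussian matrix, the normalized l1 distance between quantized images concentrates around sqrt(2/pi) times the l2 distance between the original points, thanks to the dither:

```python
import numpy as np

def quantized_map(X, M, delta=0.5, seed=0):
    """Dithered, uniformly quantized random mapping (sketch):
    x -> Q(Phi x + xi), with Phi an M x N Gaussian matrix, xi a random
    dither uniform on [0, delta), and Q the uniform scalar quantizer of
    resolution delta applied componentwise."""
    rng = np.random.default_rng(seed)
    Phi = rng.standard_normal((M, X.shape[1]))
    xi = rng.uniform(0.0, delta, M)
    return delta * np.floor((X @ Phi.T + xi) / delta)

rng = np.random.default_rng(3)
X = rng.standard_normal((20, 50))       # 20 points in R^50
Q = quantized_map(X, M=2000)
# l1 distance in the quantized domain vs l2 distance in the original one:
d_quant = np.mean(np.abs(Q[0] - Q[1]))  # normalized l1
d_orig = np.linalg.norm(X[0] - X[1])    # l2
```

The ratio d_quant / (sqrt(2/pi) * d_orig) is close to 1 for a large number of measurements, reflecting the (l1, l2) quasi-isometry discussed in the abstract.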
Quantized Compressive K-Means
The recent framework of compressive statistical learning aims at designing
tractable learning algorithms that use only a heavily compressed
representation, or sketch, of massive datasets. Compressive K-Means (CKM) is such
a method: it estimates the centroids of data clusters from pooled, non-linear,
random signatures of the learning examples. While this approach significantly
reduces computational time on very large datasets, its digital implementation
wastes acquisition resources because the learning examples are compressed only
after the sensing stage. The present work generalizes the sketching procedure
initially defined in Compressive K-Means to a large class of periodic
nonlinearities including hardware-friendly implementations that compressively
acquire entire datasets. This idea is exemplified in a Quantized Compressive
K-Means procedure, a variant of CKM that leverages 1-bit universal quantization
(i.e. retaining the least significant bit of a standard uniform quantizer) as
the periodic sketch nonlinearity. Trading the original sketch for this
resource-efficient signature (standard in most acquisition schemes) has almost
no impact on the clustering performance, as illustrated by numerical experiments.
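The 1-bit universal quantizer mentioned above, which keeps only the least significant bit of a uniform quantizer of resolution delta, is simply a square wave of period 2*delta applied componentwise; a minimal sketch:

```python
import numpy as np

def universal_1bit(x, delta=1.0):
    """1-bit universal quantization: the least significant bit of a
    uniform scalar quantizer of resolution delta, i.e. a square wave of
    period 2*delta applied to each entry."""
    return (np.floor(np.asarray(x) / delta) % 2).astype(int)

bits = universal_1bit([0.2, 1.2, 2.2])   # -> [0, 1, 0]
```

The periodicity is what makes the nonlinearity compatible with the sketching framework: shifting the input by any multiple of 2*delta leaves the bit unchanged.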
Robust Phase Unwrapping by Convex Optimization
The 2-D phase unwrapping problem aims at retrieving a "phase" image from its
modulo-2*pi observations. Many applications, such as interferometry or
synthetic aperture radar imaging, are concerned by this problem since they
proceed by recording complex or modulated data from which a "wrapped" phase is
extracted. Although 1-D phase unwrapping is trivial, a challenge remains in
higher dimensions to overcome two common problems: noise and discontinuities in
the true phase image. In contrast to state-of-the-art techniques, this work
aims at simultaneously unwrapping and denoising the phase image. We propose a robust
convex optimization approach that enforces data fidelity constraints expressed
in the corrupted phase derivative domain while promoting a sparse phase prior.
The resulting optimization problem is solved by the Chambolle-Pock primal-dual
scheme. We show that under different observation noise levels, our approach
compares favorably to those that perform the unwrapping and denoising in two
separate steps.
Comment: 6 pages, 4 figures, submitted to ICIP1
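As the abstract notes, the 1-D problem is trivial: fold each phase increment back into (-pi, pi] and re-integrate. A minimal sketch (equivalent to numpy.unwrap), far from the noisy 2-D setting the paper targets:

```python
import numpy as np

def unwrap_1d(wrapped):
    """Trivial 1-D phase unwrapping: wrap each jump between consecutive
    samples into (-pi, pi], then cumulatively re-integrate the jumps."""
    d = np.diff(wrapped)
    jumps = d - 2 * np.pi * np.round(d / (2 * np.pi))
    return np.concatenate(([wrapped[0]], wrapped[0] + np.cumsum(jumps)))

t = np.linspace(0, 4 * np.pi, 200)
phase = 1.5 * t                          # true (unwrapped) phase
wrapped = np.angle(np.exp(1j * phase))   # modulo-2*pi observations
recovered = unwrap_1d(wrapped)
```

This succeeds exactly because consecutive true-phase increments stay below pi; noise and discontinuities break that assumption in 2-D, which motivates the convex formulation above.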
Time for dithering: fast and quantized random embeddings via the restricted isometry property
Recently, many works have focused on the characterization of non-linear
dimensionality reduction methods obtained by quantizing linear embeddings,
e.g., to reach fast processing time, efficient data compression procedures,
novel geometry-preserving embeddings or to estimate the information/bits stored
in this reduced data representation. In this work, we prove that many linear
maps known to respect the restricted isometry property (RIP) can induce a
quantized random embedding with controllable multiplicative and additive
distortions with respect to the pairwise distances of the data points being
considered. In other words, linear matrices having fast matrix-vector
multiplication algorithms (e.g., based on partial Fourier ensembles or on the
adjacency matrix of unbalanced expanders) can be readily used in the definition
of fast quantized embeddings with small distortions. This implication is made
possible by applying right after the linear map an additive and random "dither"
that stabilizes the impact of the uniform scalar quantization operator applied
afterwards. For different categories of RIP matrices, i.e., for different
linear embeddings of a metric space into a lower-dimensional domain, we derive
upper bounds on the additive distortion induced by quantization, showing that
it decays either when the embedding dimension increases or when the distance
between a pair of embedded vectors decreases. Finally, we develop a novel
"bi-dithered" quantization scheme, which allows for a reduced distortion that
decreases when the embedding dimension grows and independently of the
considered pair of vectors.
Comment: Keywords: random projections, non-linear embeddings, quantization,
dither, restricted isometry property, dimensionality reduction, compressive
sensing, low-complexity signal models, fast and structured sensing matrices,
quantized rank-one projections. 31 pages.
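The stabilizing role of the dither can already be seen on a scalar: with xi uniform on [0, delta), the quantizer delta*floor((z + xi)/delta) is unbiased, i.e. its expectation over the dither equals z, which is what tames the uniform quantization applied after the fast linear map. A small numerical check (the values delta = 0.5 and z = 0.3 are arbitrary):

```python
import numpy as np

def dithered_quantizer(z, delta, dither):
    """Uniform scalar quantizer of resolution delta, applied after adding a dither."""
    return delta * np.floor((z + dither) / delta)

rng = np.random.default_rng(7)
delta, z = 0.5, 0.3
xi = rng.uniform(0.0, delta, 100_000)            # uniform dither on [0, delta)
# With a uniform dither, the quantizer is unbiased: E_xi[ Q(z + xi) ] = z.
unbiased_mean = dithered_quantizer(z, delta, xi).mean()
# Without dither, the error can be as large as delta (here Q(0.3) = 0):
biased = dithered_quantizer(z, delta, 0.0)
```

Averaged over the dither, the quantized value recovers z exactly; without it, the deterministic rounding error never averages out, whatever the number of measurements.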